Reading
Amazon's New Robots Are Rolling Out an Automation Revolution
In a giant warehouse in Reading, Massachusetts, I meet a pair of robots that look like goofy green footstools from the future. Their round eyes and satisfied grins are rendered with light-emitting diodes. They sport small lidar sensors, like tiny hats, that scan nearby objects and people in 3D. Suddenly, one of them plays a chipper little tune, its mouth starts flashing, and its eyes morph into heart shapes. This means, I am told, that the robot is happy.
Drugmaker to Test Machine Learning to Prevent Drug Shortages
To sharpen its predictions, the company's health-care division plans to start testing a cloud-based software platform later this year. The platform, made by North Reading, Mass.-based TraceLink Inc., can analyze, in real time, data points from various organizations within Merck's supply chain, including pharmacies, hospitals and wholesale distributors. TraceLink is now developing machine-learning algorithms that will be used in the pilot, which will begin with immuno-oncology drugs, designed to boost the body's immune system to fight cancer. "We want to start it in an area where the product is a lifesaving product," said Alessandro DeLuca, chief information officer for Merck's health-care division. "The value is going to be that every single patient will receive the drug that he or she needs at the right moment," Mr. DeLuca said, adding that the move could significantly cut drug shortages.
Definitively Identifying an Inherent Limitation to Actual Cognition
A century ago, discoveries of a serious kind of logical error, made separately by several leading mathematicians, led to acceptance of a sharply enhanced standard for rigor within what ultimately became the foundation for Computer Science. By 1931, Gödel had obtained a definitive and remarkable result: an inherent limitation to that foundation. The resulting limitation is not applicable to actual human cognition, even to the smallest extent, unless both of these extremely brittle assumptions hold: humans are infallible reasoners, and humans reason solely via formal inference rules. Both assumptions are contradicted by empirical data from well-known Cognitive Science experiments. This article investigates how a novel multi-part methodology recasts computability theory within Computer Science to obtain a definitive limitation whose application to human cognition avoids assumptions contradicting empirical data. The limitation applies to individual humans, to finite sets of humans, and more generally to any real-world entity.
Multi-Label Learning on Tensor Product Graph
Jiang, Jonathan (City University of Hong Kong)
A large family of graph-based semi-supervised algorithms has been developed intuitively and pragmatically for the multi-label learning problem. These methods, however, exploit label correlation only implicitly, as either part of the graph weight or an additional constraint, to improve overall classification performance. Despite their seemingly quite different formulations, we show that all existing approaches can be uniformly viewed as Label Propagation (LP) or Random Walk with Restart (RWR) on a Cartesian Product Graph (CPG). Inspired by this observation, we introduce a new framework for the multi-label classification task that employs the Tensor Product Graph (TPG), the tensor product of the data graph with the class (label) graph, in which not only the intra-class but also the inter-class associations are explicitly represented as weighted edges among graph vertices. Instead of computing directly on the TPG, we derive an iterative algorithm that is guaranteed to converge and has the same computational complexity and storage requirements as standard label propagation on the original data graph. Applications to four benchmark multi-label data sets show that our method outperforms several state-of-the-art approaches.
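For concreteness, the kind of iteration described above can be sketched in a few lines of Python/NumPy. This is an illustrative sketch under stated assumptions, not the authors' published algorithm: it assumes a row-normalized data-graph affinity matrix Wx (n x n), a normalized label-correlation matrix Wc (c x c), an initial label indicator matrix Y (n x c), and a propagation weight alpha, all of which are hypothetical names and normalization choices. The point it illustrates is that propagating labels on the tensor product graph never requires forming the (nc) x (nc) matrix, because the update on vec(F) can be carried out as a matrix update on F itself (up to the ordering convention of the Kronecker factors).

    import numpy as np

    def tpg_label_propagation(Wx, Wc, Y, alpha=0.9, tol=1e-6, max_iter=1000):
        """Sketch of label propagation on the tensor product graph Wx (x) Wc,
        computed without ever materializing the (n*c) x (n*c) matrix.

        Wx    : (n, n) row-normalized data-graph affinity matrix (assumed given)
        Wc    : (c, c) normalized label-correlation matrix (assumed given)
        Y     : (n, c) initial label indicator matrix
        alpha : propagation weight; 1 - alpha acts as the restart probability
        """
        F = Y.astype(float)
        for _ in range(max_iter):
            # The vectorized update
            #   vec(F) <- alpha * (Wc (x) Wx) vec(F) + (1 - alpha) * vec(Y)
            # is equivalent to the matrix update below, so cost and storage stay
            # comparable to standard label propagation on the data graph alone.
            F_new = alpha * (Wx @ F @ Wc.T) + (1 - alpha) * Y
            if np.abs(F_new - F).max() < tol:
                break
            F = F_new
        return F_new

If alpha < 1 and both factor graphs are normalized so that the propagation operator has spectral radius below one, the loop converges to the usual closed-form fixed point, which is consistent with the convergence guarantee stated in the abstract.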
Qualitative Spatial Reasoning: Extracting and Reasoning with Spatial Aggregates
Bailey-Kellogg, Chris, Zhao, Feng
Reasoning about spatial data is a key task in many applications, including geographic information systems, meteorological and fluid-flow analysis, computer-aided design, and protein structure databases. Such applications often require the identification and manipulation of qualitative spatial representations, for example, to detect whether one object will soon occlude another in a digital image or to efficiently determine relationships between a proposed road and wetland regions in a geographic data set. Qualitative spatial reasoning (QSR) provides representational primitives (a spatial "vocabulary") and inference mechanisms for these tasks. This article first reviews representative work on QSR for data-poor scenarios, where the goal is to design representations that can answer qualitative queries without much numeric information. It then turns to the data-rich case, where the goal is to derive and manipulate qualitative spatial representations that efficiently and correctly abstract important spatial aspects of the underlying data for use in subsequent tasks. The article focuses on how a particular QSR system, SPATIAL AGGREGATION, can help answer spatial queries for scientific and engineering data sets. A case study application of weather analysis illustrates the effective representation and reasoning supported by both data-poor and data-rich forms of QSR.
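To make the notion of a spatial aggregate concrete in the data-rich setting, here is a minimal generic sketch (not the SPATIAL AGGREGATION system itself, and with hypothetical names and thresholds): it builds a neighborhood graph over point samples of a scalar field and flood-fills connected groups of neighbors with similar values into aggregates, which is the kind of qualitative abstraction the article discusses.

    import numpy as np
    from collections import deque

    def spatial_aggregates(points, values, radius=1.0, value_tol=0.5):
        """Group point samples of a field into qualitative aggregates:
        connected neighborhoods whose values are mutually similar.

        points    : (n, 2) array of sample locations (hypothetical input)
        values    : (n,) array of field values at those locations
        radius    : neighborhood radius used to build the adjacency structure
        value_tol : max value difference for two neighbors to share an aggregate
        """
        n = len(points)
        # Neighborhood graph: connect samples that are spatially close.
        dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        neighbors = [np.flatnonzero((dists[i] <= radius) & (np.arange(n) != i))
                     for i in range(n)]

        labels = -np.ones(n, dtype=int)    # -1 means "not yet assigned"
        current = 0
        for seed in range(n):
            if labels[seed] != -1:
                continue
            labels[seed] = current
            queue = deque([seed])
            while queue:                   # flood-fill one aggregate
                i = queue.popleft()
                for j in neighbors[i]:
                    if labels[j] == -1 and abs(values[i] - values[j]) <= value_tol:
                        labels[j] = current
                        queue.append(j)
            current += 1
        return labels                      # labels[i] is the aggregate id of sample i

Downstream qualitative queries (adjacency of regions, containment, trends across aggregates) can then be posed over these discrete aggregates rather than over the raw numeric field.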
Review of Knowledge-Based Design Systems
Reviewed by Amit Mukerjee
A harbinger of change is perhaps the book Knowledge-Based Design Systems by R. D. Coyne, M. A. Rosenman, A. D. Radford, M. Balachandran, and J. S. Gero (Addison Wesley, Reading, Mass., 1990, 567 pages). The design constructs about the functional aspects of these can be no more general than the prototypes, which suggest control actions, information that can then be used to refine or adapt the prototype to meet the design goals. The design problem is then reduced to the problem of searching through these possible control actions to identify a sequence that will result in the desired design, and planning-type search through a space of such actions is used in this process. Where the book falls short is in illustrating the difference between the design task and other traditional tasks in vision, planning, learning, and so on. Much of the discussion concentrates on issues that one would have thought would be different here; some of the other problems encountered, such as conflicting criteria, learning, and vocabulary inadequacy, may be why the authors turn to analogy.
Artificial Intelligence: A Rand Perspective
Klahr, Philip, Waterman, Donald A.
Rand researchers were building one of the first stored-program digital computers, the JOHNNIAC (see Figure 1) (Gruenberger, 1968); George Dantzig and his associates were inventing linear programming (Dantzig, 1963); Les Ford and Ray Fulkerson were developing techniques for network flow analysis (Ford & Fulkerson, 1962); Richard Bellman was developing his ideas on dynamic programming (Bellman, 1953); Herman Kahn was advancing techniques for Monte Carlo simulation (Kahn, 1955); Lloyd Shapley was revolutionizing game theory (Shapley, 1951-1960); and Stephen Kleene was advancing our understanding of finite automata (Kleene, ...). AI also had its share of controversy, however, at Rand and elsewhere. Given its quick rise to popularity and its ambitious predictions (Simon & Newell, 1958), AI soon had its critics, and one of the most prominent, Hubert Dreyfus, published his famous critique of AI (Dreyfus, 1965) while he was consulting at Rand. In addition, the early promise of automatic machine translation of text from one language to another (the emphasis at Rand was on translation from Russian to English) produced only modest systems, and the goal of fully automated machine translation was abandoned in the early 1960s.
Tenth Annual Workshop on Artificial Intelligence in Medicine: An Overview
Chandrasekaran, B., Smith, Jack W.
The Artificial Intelligence in Medicine (AIM) Workshop has become a tradition. Meeting every year for the past nine years, it has been the forum where all the issues from basic research through applications to implementations have been discussed; it has also become a community building activity, bringing together researchers, medical practitioners, and government and industry sponsors of AIM activities. The AIM Workshop held at Fawcett Center for Tomorrow at Ohio State University, June 30 - July 3, 1984, was no exception. It brought together more than 100 active participants in AIM.
Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project
Buchanan, Bruce G., Shortliffe, Edward H.
Artificial intelligence, or AI, is largely an experimental science—at least as much progress has been made by building and analyzing programs as by examining theoretical questions. MYCIN is one of several well-known programs that embody some intelligence and provide data on the extent to which intelligent behavior can be programmed. As with other AI programs, its development was slow and not always in a forward direction. But we feel we learned some useful lessons in the course of nearly a decade of work on MYCIN and related programs. In this book we share the results of many experiments performed in that time, and we try to paint a coherent picture of the work. The book is intended to be a critical analysis of several pieces of related research, performed by a large number of scientists. We believe that the whole field of AI will benefit from such attempts to take a detailed retrospective look at experiments, for in this way the scientific foundations of the field will gradually be defined. It is for all these reasons that we have prepared this analysis of the MYCIN experiments.
Artificial Intelligence Prepares for 2001
Artificial Intelligence, as a maturing scientific/engineering discipline, is beginning to find its niche among the variety of subjects that are relevant to intelligent, perceptive behavior. A view of AI is presented that is based on a declarative representation of knowledge with semantic attachments to problem-specific procedures and data structures. Several important challenges to this view are briefly discussed. It is argued that research in the field would be stimulated by a project to develop a computer individual that would have a continuing existence in time.
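The phrase "declarative representation of knowledge with semantic attachments to problem-specific procedures and data structures" can be illustrated with a small sketch. This is a generic illustration of the idea, not the system proposed in the article; all predicate names, facts, and data are hypothetical. Most knowledge lives in declarative assertions, while selected predicates are attached to procedures that compute their truth from problem-specific data on demand.

    # Declarative facts (hypothetical domain).
    facts = {("parcel", "p1"), ("parcel", "p2")}

    # Problem-specific data structure consulted by an attached procedure.
    weights = {"p1": 3.2, "p2": 12.0}

    # Semantic attachments: predicates whose truth is computed, not stored.
    attachments = {
        "heavy": lambda x: weights.get(x, 0.0) > 10.0,
    }

    def holds(pred, arg):
        """True if pred(arg) is a stored fact or its attached procedure says so."""
        if (pred, arg) in facts:
            return True
        proc = attachments.get(pred)
        return proc(arg) if proc else False

    # A declarative rule, needs_crane(x) if parcel(x) and heavy(x), can now
    # mix stored facts with procedurally evaluated predicates.
    needs_crane = [x for (p, x) in facts if p == "parcel" and holds("heavy", x)]
    print(needs_crane)   # ['p2']

The appeal of such an arrangement is that reasoning remains declarative while specialized subproblems are delegated to ordinary procedures and data structures, which is the flavor of the view the article presents.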